Search for: All records

Creators/Authors contains: "Shen, Junjie"


  1. Connected Vehicle (CV) technologies are being rapidly deployed across the globe and will soon reshape our transportation systems, bringing benefits to mobility, safety, and the environment. Meanwhile, such technologies also attract the attention of cyberattackers. Recent work shows that CV-based Intelligent Traffic Signal Control systems are vulnerable to data spoofing attacks, which can cause severe congestion at intersections. In this work, we explore a general detection strategy for infrastructure-side CV applications that estimates the trustworthiness of CVs using readily available infrastructure-side sensors. We implement our detector for CV-based traffic signal control and evaluate it against two representative congestion attacks. Our evaluation in an industrial-grade traffic simulator shows that the detector detects the attacks with at least a 95% true positive rate while keeping the false positive rate below 7%, and that it is robust to sensor noise. (A minimal illustrative sketch of the trust-score idea appears after this list.)
  2. In high-level Autonomous Driving (AD) systems, behavioral planning is in charge of making high-level driving decisions such as cruising and stopping, and is thus highly security-critical. In this work, we perform the first systematic study of semantic security vulnerabilities specific to overly-conservative AD behavioral planning, i.e., behaviors that can cause failed or significantly degraded mission performance, which can be critical for AD services such as robo-taxi/delivery. We call them semantic Denial-of-Service (DoS) vulnerabilities, which we envision to be widely exposed in practical AD systems due to the tendency toward conservativeness in order to avoid safety incidents. To achieve high practicality and realism, we assume that the attacker can only introduce seemingly benign external physical objects into the driving environment, e.g., off-road dumped cardboard boxes. To systematically discover such vulnerabilities, we design PlanFuzz, a novel dynamic testing approach that addresses various problem-specific design challenges. Specifically, we propose and identify planning invariants as novel testing oracles, and design new input generation to systematically enforce problem-specific constraints on attacker-introduced physical objects. We also design a novel behavioral planning vulnerability distance metric to effectively guide the discovery. We evaluate PlanFuzz on 3 planning implementations from practical open-source AD systems and find that it can effectively discover 9 previously unknown semantic DoS vulnerabilities without false positives. We find all our new designs necessary: without each design, statistically significant performance drops are generally observed. We further perform exploitation case studies using simulation and real-vehicle traces, and discuss root causes and potential fixes. (A minimal sketch of the planning-invariant oracle idea appears after this list.)
  3. The security of Autonomous Driving (AD) systems has been gaining researchers' and the public's attention recently. Given that AD companies have invested a huge amount of resources in developing their AD models, e.g., localization models, these models, and especially their parameters, are important intellectual property and deserve strong protection. In this work, we examine whether the confidentiality of production-grade Multi-Sensor Fusion (MSF) models, in particular the Error-State Kalman Filter (ESKF), can be compromised by an outside adversary. We propose a new model extraction attack called TaskMaster that can infer the secret ESKF parameters under a black-box assumption. In essence, TaskMaster trains a substitute ESKF model to recover the parameters by observing the inputs and outputs of the targeted AD system. To precisely recover the parameters, we combine a set of techniques such as gradient-based optimization, search-space reduction, and multi-stage optimization. Evaluation results on a real-world vehicle sensor dataset show that TaskMaster is practical. For example, with 25 seconds of AD sensor data for training, the substitute ESKF model reaches centimeter-level accuracy compared with the ground-truth model. (A minimal sketch of the extraction idea appears after this list.)
  4. Safety and security play critical roles in the success of Autonomous Driving (AD) systems. Since AD systems rely heavily on AI components, the safety and security research of such components has also received great attention in recent years. While it is widely recognized that AI component-level (mis)behavior does not necessarily lead to AD system-level impacts, most existing work still adopts only component-level evaluation. To fill this critical methodological gap between component-level evaluation and real system-level impact, a system-driven evaluation platform jointly constructed by the community could be the solution. In this paper, we present PASS (Platform for Auto-driving Safety and Security), a system-driven evaluation prototype based on simulation. By sharing our platform-building concept and preliminary efforts, we hope to call on the community to build a uniform and extensible platform that makes AI safety and security work sufficiently meaningful at the system level.
  5. The perception module is key to the security of Autonomous Driving (AD) systems: it perceives the environment through sensors to help make safe and correct driving decisions on the road. The localization module is usually considered independent of the perception module. However, we discover that the correctness of the perception output depends heavily on localization, due to the widely used Region-of-Interest (ROI) design adopted in perception. Leveraging this insight, we propose an ROI attack and perform a case study on traffic light detection in AD systems. We evaluate the ROI attack on a production-grade AD system, Baidu Apollo, in end-to-end simulation environments, and find that our attack can make the victim run a red light or cause denial of service with a 100% success rate. (A minimal sketch of the ROI dependency appears after this list.)
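
Below is a minimal sketch of the trust-score idea behind the infrastructure-side spoofing detector in item 1. The helper names, the 0.6/0.4 smoothing weights, the Gaussian noise model, and the 0.5 alert threshold are illustrative assumptions, not the parameters used in the paper.

    # Minimal sketch: score each CV by how consistent its self-reported position
    # is with an independent infrastructure-side sensor measurement, and flag it
    # once the score stays low. All constants here are illustrative assumptions.
    import math

    def trust_update(prior_trust, reported_pos, sensed_pos, sensor_sigma=2.0):
        """Blend the prior trust with the likelihood that the CV-reported
        position is consistent with the infrastructure sensor measurement."""
        residual = abs(reported_pos - sensed_pos)
        likelihood = math.exp(-0.5 * (residual / sensor_sigma) ** 2)
        return 0.6 * prior_trust + 0.4 * likelihood  # exponential smoothing

    def is_spoofed(trust_history, threshold=0.5):
        """Flag the CV once its latest trust score falls below the threshold."""
        return bool(trust_history) and trust_history[-1] < threshold

    # Usage: a spoofed CV keeps reporting a stopped vehicle at 100 m while the
    # infrastructure sensor sees the actual vehicle drifting further away.
    trust, history = 1.0, []
    for reported, sensed in [(100.0, 101.5), (100.0, 108.0), (100.0, 115.0)]:
        trust = trust_update(trust, reported, sensed)
        history.append(trust)
    print(history, is_spoofed(history))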
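
The sketch below illustrates, very loosely, the planning-invariant oracle idea from the PlanFuzz abstract: a placement of a seemingly benign off-road object counts as a finding when it violates the invariant that the planner should keep making progress on a physically clear road. The toy planner and every threshold are assumptions standing in for a real behavioral planner, not PlanFuzz's actual design.

    # Toy illustration of a planning-invariant oracle: fuzz off-road object
    # placements and report those that stall an (over-conservative) planner
    # even though the lane itself is clear. The planner and thresholds below
    # are assumptions, not taken from PlanFuzz or any real AD system.
    import random

    random.seed(1)
    LANE_HALF_WIDTH = 1.75  # metres; an object beyond this band is off-road

    def toy_planner_speed(object_lateral_m):
        """Over-conservative stand-in planner: stops for anything within 3 m
        laterally, even if the object never enters the lane."""
        return 0.0 if abs(object_lateral_m) < 3.0 else 10.0

    def violates_progress_invariant(object_lateral_m):
        off_road = abs(object_lateral_m) > LANE_HALF_WIDTH
        return off_road and toy_planner_speed(object_lateral_m) == 0.0

    # Fuzz candidate off-road placements and keep the invariant violations.
    findings = sorted(
        round(lat, 2)
        for lat in (random.uniform(1.8, 6.0) for _ in range(20))
        if violates_progress_invariant(lat)
    )
    print("off-road placements that stall the planner:", findings)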
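
For the TaskMaster abstract, the sketch below shows the extraction idea only in miniature: recovering an unknown parameter of a black-box filter by fitting a substitute filter to observed inputs and outputs. The real attack targets a full Error-State Kalman Filter with gradient-based, multi-stage optimization; here everything is reduced to an assumed scalar constant-gain filter and a grid search.

    # Toy model extraction: recover the hidden gain of a black-box scalar filter
    # by choosing the substitute gain that best reproduces observed outputs.
    import random

    random.seed(0)
    SECRET_GAIN = 0.35  # parameter hidden inside the "black-box" estimator

    def run_filter(measurements, gain):
        """Scalar constant-gain filter: state += gain * (measurement - state)."""
        state, outputs = 0.0, []
        for z in measurements:
            state += gain * (z - state)
            outputs.append(state)
        return outputs

    # The attacker observes the black-box inputs and outputs.
    measurements = [10 + random.gauss(0, 1) for _ in range(200)]
    observed = run_filter(measurements, SECRET_GAIN)

    # Grid search (a crude stand-in for gradient-based, multi-stage optimization).
    best_gain = min(
        (g / 100 for g in range(1, 100)),
        key=lambda g: sum((a - b) ** 2
                          for a, b in zip(run_filter(measurements, g), observed)),
    )
    print(f"recovered gain: {best_gain:.2f} (secret: {SECRET_GAIN})")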
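
Finally, the sketch below shows why the Region-of-Interest (ROI) design couples perception to localization, the dependency exploited by the ROI attack: the traffic-light crop is projected from the HD map using the believed vehicle pose, so a spoofed lateral localization error shifts the crop until the real light falls outside it. The pinhole projection and all numbers are illustrative assumptions, not Apollo's actual geometry.

    # Toy pinhole projection: the ROI center comes from the believed vehicle
    # pose and the HD map, so a localization offset moves the crop away from
    # where the traffic light actually appears in the image.
    FOCAL_PX = 1000.0      # assumed focal length, in pixels
    ROI_HALF_WIDTH = 100   # assumed half-width of the crop, in pixels

    def projected_pixel(lateral_offset_m, distance_m):
        """Horizontal pixel offset of a point at the given lateral offset."""
        return FOCAL_PX * lateral_offset_m / distance_m

    def light_inside_roi(true_offset_m, believed_offset_m, distance_m):
        roi_center = projected_pixel(believed_offset_m, distance_m)
        true_pixel = projected_pixel(true_offset_m, distance_m)
        return abs(true_pixel - roi_center) <= ROI_HALF_WIDTH

    # With correct localization the light sits inside the crop ...
    print(light_inside_roi(true_offset_m=2.0, believed_offset_m=2.0, distance_m=20.0))
    # ... but a 3 m spoofed lateral error pushes it outside the crop entirely.
    print(light_inside_roi(true_offset_m=2.0, believed_offset_m=5.0, distance_m=20.0))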